This guide presents the "MENTORS in CS" program, a comprehensive model for providing sustained support to K-12 computer science (CS) teachers, particularly those new to the field. It details the program's foundational structures, including mentor-mentee partnerships, a community of practice, and continuous research and refinement through design-based implementation research (DBIR). The guide offers actionable insights and reproducible resources, such as program timelines, recruitment materials, and a mentor toolkit, to facilitate the replication and scaling of similar equity-driven mentoring initiatives. Key learnings regarding participant outcomes, mentor training, and adapting the program for diverse educational contexts are shared to aid widespread dissemination and impact.
-
This toolkit offers a structured, year-long framework to support peer mentoring partnerships in computer science education. It provides (1) a program calendar outlining monthly activities, scheduled mentor–mentee meetings, and three mentorship cycles aligned with key CS teacher standards; (2) guided self-reflection tools to help teachers identify professional strengths and areas for growth; (3) a partnership agreement to establish norms for communication and collaboration; (4) goal-setting templates with illustrative examples to scaffold targeted professional learning; and (5) mentoring logs to document bimonthly meetings and track progress across three 2.5-month cycles.
-
Rapid changes in artificial intelligence (AI) require changes in how and what is taught about AI to K-12 students. These changes will ensure that students are prepared to be smart consumers and competent creators of AI, as well as informed citizens. To meet this need, CSTA, in partnership with AI4K12, spearheaded the Identifying AI Priorities for All K-12 Students project. The project gathered experts – including teachers, researchers, administrators, and curriculum developers – to articulate priorities for AI education. This report summarizes the result of that effort. The project had four goals:
1. Identify priorities for AI learning across each K-12 grade band. As a result of a collaborative, iterative process, the project articulated five categories for AI learning: Humans and AI; Representation and Reasoning; Machine Learning; Ethical AI System Design and Programming; and Societal Impacts of AI.
2. Suggest updates to the AI4K12 Guidelines. Advances in generative AI necessitate updates to the AI4K12 Guidelines. This is especially true for Big Idea #4: Natural Interaction, since generative AI represents a substantial advance in the ability of AI to interact with humans. Similarly, generative AI raises many ethical questions relevant to Big Idea #5: Societal Impacts.
3. Advance the research agenda for K-12 AI education. Priorities for research in AI education include the importance of supporting teachers, promoting inclusive and student-centered pedagogies, developing appropriate tools, gaining a better understanding of AI's impact on learning, and ensuring equity in AI education.
4. Share promising practices across the AI and CS education communities. Participants shared their work in AI education. While the practices described varied, there were some common themes. Concerns about ethics and responsible AI were foregrounded, and hands-on learning activities were featured prominently. Meeting the needs of all children was a key concern, with approaches and tools that are widely accessible as well as engaging for all students.
As a result of these common themes, we offer related recommendations for AI curriculum and instruction. The report also includes an exploration of the tensions and challenges that emerged from the project, such as the difficulty of categorizing, organizing, and prioritizing learning content. Preparing students to succeed personally and professionally in a world powered by computing will require rigorous, high-quality, and equitable learning opportunities in AI education. This project sought to determine priorities for AI education for all students to learn as part of a robust foundation in computer science, as well as options for more comprehensive study of AI. Within and across these priorities, two themes stand out. First, all students need to explore the personal, societal, and environmental impacts – both positive and negative – of AI. Second, students need to develop a broad conceptual understanding of how AI works: a frequent refrain from the project's participants was that students need to understand that "AI isn't magic." While implementing high-quality AI education, at scale, for all students will be challenging, the work already undertaken by convening participants demonstrates that there are elements of a foundation in place, one that can be built upon to ensure that all students are prepared to flourish in a world powered by computing.
-
Introduction: Learning standards are a crucial determinant of computer science (CS) education at the K-12 level, but they are not often researched despite their importance. We sought to address this gap with a mixed-methods study examining state and national K-12 CS standards. Research Question: What are the similarities and differences between state and national computer science standards? Methods: We tagged the state CS standards (n = 9695) according to their grade band/level, topic, course, and similarity to a Computer Science Teachers Association (CSTA) standard. We also analyzed the content of standards similar to CSTA standards to determine their topics, cognitive complexity, and other features. Results: We found some commonalities amidst broader diversity in approaches to organization and content across the states, relative to the CSTA standards. The content analysis showed that a common difference between state and CSTA standards is that the state standards tend to include concrete examples. We also found differences across states in how similar their standards are to CSTA standards, as well as differences in how cognitively complex the standards are. Discussion: Standards writers face many tensions and trade-offs, and this analysis shows how – in general terms – various states have chosen to manage those trade-offs in writing standards. For example, adding examples can improve clarity and specificity, but perhaps at the cost of brevity and longevity. A better understanding of the landscape of state standards can assist future standards writers, curriculum developers, and researchers in their work.
-
Introduction: Recent AI advances, particularly the introduction of large language models (LLMs), have expanded the capacity to automate various tasks, including the analysis of text. This capability may be especially helpful in education research, where lack of resources often hampers the ability to perform various kinds of analyses, particularly those requiring a high level of expertise in a domain and/or a large set of textual data. For instance, we recently coded approximately 10,000 state K-12 computer science standards, requiring over 200 hours of work by subject matter experts. If LLMs are capable of completing a task such as this, the savings in human resources would be immense. Research Questions: This study explores two research questions: (1) How do LLMs compare to humans in the performance of an education research task? and (2) What do errors in LLM performance on this task suggest about current LLM capabilities and limitations? Methodology: We used a random sample of state K-12 computer science standards. We compared the output of three LLMs – ChatGPT, Llama, and Claude – to the work of human subject matter experts in coding the relationship between each state standard and a set of national K-12 standards. Specifically, the LLMs and the humans determined whether each state standard was identical to, similar to, based on, or different from the national standards and (if it was not different) which national standard it resembled. Results: Each of the LLMs identified a different national standard than the subject matter expert in about half of instances. When the LLM identified the same standard, it usually categorized the type of relationship (i.e., identical to, similar to, based on) in the same way as the human expert. However, the LLMs sometimes misidentified ‘identical’ standards. Discussion: Our results suggest that LLMs are not currently capable of matching human performance on the task of classifying learning standards. 
The misidentification of some state standards as identical to national standards – when they clearly were not – is an interesting error, given that traditional computing technologies can easily identify identical text. Similarly, some of the mismatches between the LLM and human performance indicate clear errors on the part of the LLMs. However, some of the mismatches are difficult to assess, given the ambiguity inherent in this task and the potential for human error. We conclude the paper with recommendations for the use of LLMs in education research based on these findings.
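The agreement analysis this abstract describes – checking whether an LLM matched a state standard to the same national standard as the human expert, and whether it assigned the same relationship label – can be sketched as follows. The record format, standard IDs, and sample codings below are hypothetical stand-ins for the study's dataset, not its actual data.

```python
# Compare hypothetical LLM codings of state standards against human
# expert codings. Each entry maps a state standard ID to the matched
# CSTA standard (or None) and a relationship label: identical,
# similar, based on, or different.

human = {
    "WA-1A-01": ("CSTA-1A-IC-16", "similar"),
    "TX-2-05":  ("CSTA-2-AP-12", "identical"),
    "FL-3A-02": (None, "different"),
    "OH-1B-07": ("CSTA-1B-DA-06", "based on"),
}
llm = {
    "WA-1A-01": ("CSTA-1A-IC-16", "identical"),   # same match, different label
    "TX-2-05":  ("CSTA-2-AP-12", "identical"),    # full agreement
    "FL-3A-02": ("CSTA-3A-IC-24", "similar"),     # human coded "different"
    "OH-1B-07": ("CSTA-1B-AP-08", "based on"),    # matched a different standard
}

def agreement(human, llm):
    """Return (share matching same standard,
    share with same label among those matches)."""
    same_match = same_label = 0
    for sid, (h_std, h_label) in human.items():
        l_std, l_label = llm[sid]
        if h_std == l_std:
            same_match += 1
            if h_label == l_label:
                same_label += 1
    n = len(human)
    return same_match / n, (same_label / same_match if same_match else 0.0)

match_rate, label_rate = agreement(human, llm)
print(f"matched same standard: {match_rate:.0%}")          # 50%
print(f"same relationship among matches: {label_rate:.0%}")  # 50%
```

On this toy data the pattern mirrors the reported result: the LLM picks a different standard about half the time, but usually agrees on the relationship type when the match is shared.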
-
Introduction: State and national learning standards play an important role in articulating and standardizing K-12 computer science education. However, these standards have not been extensively researched, especially in terms of their cognitive complexity. Analyses of cognitive complexity, accomplished via comparison of standards to a taxonomy of learning, can provide an important data point for understanding the prevalence of higher-order versus lower-order thinking skills in a set of standards. Objective: The objective of this study is to answer the research question: How do state and national K-12 computer science standards compare in terms of their cognitive complexity? Methods: We used Bloom's Revised Taxonomy in order to assess the cognitive complexity of a dataset consisting of state (n = 9695) computer science standards and the 2017 Computer Science Teachers Association (CSTA) standards (n = 120). To enable a quantitative comparison of the standards, we assigned numbers to the Bloom's levels. Results: The CSTA standards had a higher average level of cognitive complexity than most states' standards. States were more likely to have standards at the lowest Bloom's level than the CSTA standards. There was a wide variety of cognitive complexity by state and, within a state, there was variation by grade band. For the states, standards at the evaluate level were least common; in the CSTA standards, the remember level was least common. Discussion: While there are legitimate critiques of Bloom's Revised Taxonomy, it may nonetheless be a useful tool for assessing learning standards, especially comparatively. Our results point to differences between and within state and national standards. Recognition of these differences and their implications can be leveraged by future standards writers, curriculum developers, and computing education researchers to craft standards that best meet the needs of all learners.
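The quantitative step this abstract describes – assigning numbers to Bloom's Revised Taxonomy levels so average cognitive complexity can be compared across sets of standards – can be sketched as below. The 1–6 mapping follows the taxonomy's conventional ordering; the sample codings are hypothetical, not drawn from the study's dataset.

```python
# Map Bloom's Revised Taxonomy levels to 1-6 so the cognitive
# complexity of a coded set of standards can be averaged and compared.
BLOOM = {
    "remember": 1, "understand": 2, "apply": 3,
    "analyze": 4, "evaluate": 5, "create": 6,
}

def mean_complexity(levels):
    """Average Bloom's level for a list of coded standards."""
    scores = [BLOOM[lvl] for lvl in levels]
    return sum(scores) / len(scores)

# Hypothetical codings for two small sets of standards.
csta_sample  = ["analyze", "create", "evaluate", "apply"]
state_sample = ["remember", "understand", "apply", "remember"]

print(f"CSTA sample mean:  {mean_complexity(csta_sample):.2f}")   # 4.50
print(f"state sample mean: {mean_complexity(state_sample):.2f}")  # 1.75
```

A higher mean indicates a set of standards weighted toward higher-order thinking skills, which is the kind of comparison the study reports between CSTA and state standards.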